
    Incorporating independent component analysis and multi-temporal SAR techniques to retrieve rapid postseismic deformation

    This study investigates the ongoing postseismic deformation induced by the two moderate mainshocks (Mw 6.1 and Mw 6.0) of the 2017 Hojedk earthquakes in southern Iran. Available Sentinel-1 TOPS C-band Synthetic Aperture Radar (SAR) images acquired over about one year after the earthquakes are used to analyze the postseismic activity. An adaptive method incorporating Independent Component Analysis (ICA) and multi-temporal Small BAseline Subset (SBAS) Interferometric SAR (InSAR) techniques is proposed and implemented to recover the rapid deformation. The method is applied to the series of interferograms generated in a fully constructed SBAS network to retrieve the postseismic deformation signal. The ICA algorithm uses a linear transformation to decompose the input mixed signal into its source components, which are non-Gaussian and mutually independent. This analysis allows the low-rate postseismic deformation signal to be extracted from a mixture of interferometric phase components. The independent sources recovered from the multi-temporal InSAR dataset are then analyzed using a group clustering test that aims to identify and enhance the deformation signal of interest. Analysis of the processed interferograms indicates promising performance of the proposed method in determining tectonic deformation. The method performs well particularly when the tectonic signal is dominated by undesired signals, including atmospheric delay and orbital/unwrapping noise, which count as temporally uncorrelated components. In contrast to the standard SBAS time series method, the ICA-based time series analysis estimates the cumulative deformation with no prior assumption about the elevation dependence of the interferometric phase or the temporal nature of the tectonic signal. Application of the method to 433 Sentinel-1 pairs within the dataset reveals two distinct deformation patches corresponding to the postseismic deformation. Beyond its performance in this case, the proposed method automatically detects rapid or low-rate tectonic processes under unfavorable conditions. © Authors 2020. All rights reserved
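
    The core decomposition step can be illustrated with scikit-learn's FastICA on synthetic data: a mixed, InSAR-like displacement time series is unmixed into independent source components. This is only a minimal sketch of the ICA idea; the SBAS network construction and the group clustering test are not reproduced, and the signal shapes, mixing matrix, and noise level are assumptions.

```python
# Minimal sketch: unmix a synthetic InSAR-like time series into independent
# source components with FastICA. All signals below are illustrative only.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)                   # ~one year of acquisitions (normalized)

# Hypothetical sources: a logarithmic postseismic transient, a seasonal
# (atmospheric-like) oscillation, and temporally uncorrelated noise.
postseismic = np.log1p(20 * t)               # rapid early decay, then flattening
seasonal = np.sin(2 * np.pi * 4 * t)
noise = 0.3 * rng.standard_normal(len(t))
S = np.column_stack([postseismic, seasonal, noise])

# ICA assumes the observed phase is a linear mixture of independent,
# non-Gaussian components; mix the sources with an unknown matrix A.
A = rng.uniform(0.5, 1.5, size=(3, 3))
X = S @ A.T                                  # observed mixed signals

# Recover the independent components; their order and sign are arbitrary.
ica = FastICA(n_components=3, random_state=0)
S_est = ica.fit_transform(X)
print("recovered components:", S_est.shape)  # (200, 3)
```

    Because ICA returns components in arbitrary order and sign, a subsequent selection step, such as the group clustering test in the paper, is needed to single out the component carrying the postseismic transient.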

    Electronic Shell Structure of Nanoscale Superconductors

    Motivated by recent experiments on Al nanoparticles, we have studied the effects of fixed electron number and small size in nanoscale superconductors, by applying the canonical BCS theory for the attractive Hubbard model in two and three dimensions. A negative "gap" in particles with an odd number of electrons, as observed in the experiments, is obtained in our canonical scheme. For particles with an even number of electrons, the energy gap exhibits shell structure as a function of electron density or system size in the weak-coupling regime: the gap is particularly large for "magic numbers" of electrons for a given system size, or of atoms for a fixed electron density. The grand canonical BCS method essentially misses this feature. Possible experimental methods for observing such shell effects are discussed.
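
    A schematic way to see the odd-even "gap" is through the parity gap, a second difference of the ground-state energy E(N). The toy sketch below assumes a crude energy model in which each bound pair lowers the energy by a pairing energy g; it reproduces the sign flip between even and odd electron numbers but none of the shell structure, which requires the full canonical BCS calculation. The energy model, parameters, and sign convention are all assumptions, not the paper's method.

```python
# Toy illustration of the odd-even parity "gap":
#   Delta(N) = [E(N-1) + E(N+1)] / 2 - E(N)
# With E(N) = eps*N - g*(N // 2) (each bound pair lowers the energy by g),
# Delta is +g/2 for even N (breaking a pair costs energy) and -g/2 for odd N
# (one electron is left unpaired), mirroring the negative "gap" seen for
# particles with an odd number of electrons. eps and g are arbitrary.
eps, g = 1.0, 0.4

def energy(n):
    """Toy ground-state energy: single-particle part minus pairing energy."""
    return eps * n - g * (n // 2)

def parity_gap(n):
    """Odd-even staggering of E(N)."""
    return 0.5 * (energy(n - 1) + energy(n + 1)) - energy(n)

for n in range(2, 8):
    parity = "even" if n % 2 == 0 else "odd"
    print(n, parity, f"{parity_gap(n):+.2f}")   # +0.20 for even N, -0.20 for odd N
```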

    Deep domain adaptation by weighted entropy minimization for the classification of aerial images

    Fully convolutional neural networks (FCN) are successfully used for the automated pixel-wise classification of aerial images and possibly additional data. However, they require many labelled training samples to perform well. One approach addressing this issue is semi-supervised domain adaptation (SSDA), in which labelled training samples from a source domain and unlabelled samples from a target domain are used jointly to obtain a target domain classifier, without requiring any labelled samples from the target domain. In this paper, a two-step approach for SSDA is proposed. The first step corresponds to supervised training on the source domain, making use of strong data augmentation to increase the initial performance on the target domain. In the second step, the model is adapted by entropy minimization using a novel weighting strategy. The approach is evaluated on the basis of five domains, corresponding to five cities. Several training variants and adaptation scenarios are tested, indicating that proper data augmentation alone can improve the initial target domain performance significantly, resulting in an average overall accuracy of 77.5%. The weighted entropy minimization improves the overall accuracy on the target domains in 19 out of 20 scenarios, by 1.8% on average. In all experiments, a novel FCN architecture is used that yields results comparable to those of the best-performing models on the ISPRS labelling challenge while having an order of magnitude fewer parameters than commonly used FCNs. © 2020 Copernicus GmbH. All rights reserved
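
    A minimal sketch of the adaptation step, assuming a PyTorch FCN: per-pixel prediction entropy on unlabelled target images is minimized under a per-pixel weight. The paper's novel weighting strategy is not detailed in the abstract, so a simple confidence-based weight is used here purely as a placeholder; model, optimizer, and target_batch are assumed names.

```python
# Sketch of weighted entropy minimization for SSDA; the weighting below is a
# placeholder assumption, not the paper's strategy.
import torch
import torch.nn.functional as F

def weighted_entropy_loss(logits: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    """logits: (B, C, H, W) FCN outputs; weights: (B, H, W) per-pixel weights."""
    log_p = F.log_softmax(logits, dim=1)
    p = log_p.exp()
    entropy = -(p * log_p).sum(dim=1)        # per-pixel prediction entropy
    return (weights * entropy).mean()

def confidence_weights(logits: torch.Tensor) -> torch.Tensor:
    """Placeholder weighting: trust pixels whose top class is already confident."""
    with torch.no_grad():
        return F.softmax(logits, dim=1).max(dim=1).values

# Usage inside the adaptation loop (model and target_batch are assumed):
# logits = model(target_batch)               # no target labels needed
# loss = weighted_entropy_loss(logits, confidence_weights(logits))
# loss.backward(); optimizer.step()
```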

    Multi-scale building maps from aerial imagery

    Nowadays, the extraction of buildings from aerial imagery is mainly done with deep convolutional neural networks (DCNNs). Buildings are predicted as binary pixel masks and then regularized to polygons. Hampered by nearby occlusions (such as trees), building eaves, and sometimes imperfect imagery data, these results can hardly be used to generate detailed building footprints comparable to authoritative data. Therefore, most products can only be used for mapping at smaller map scales. The level of detail that should be retained is normally determined by the scale parameter in the regularization algorithm. However, this scale information has already been defined in cartography. From existing maps of different scales, a neural network can learn such scale information implicitly. The network can then perform generalization directly on the mask output and generate multi-scale building maps at once. In this work, a pipeline method is proposed that generates multi-scale building maps directly from aerial imagery. We used a land cover classification model to provide the building blobs. With models pre-trained for cartographic building generalization, the blobs were generalized to three target map scales: 1:10,000, 1:15,000, and 1:25,000. After post-processing with vectorization and regularization, multi-scale building maps were generated and then compared with existing authoritative building data qualitatively and quantitatively. In addition, change detection was performed, and suggestions for unmapped buildings could be provided at a desired map scale. © 2020 International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives
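
    How a target map scale controls the detail retained in a building mask can be sketched with simple morphology. This is a crude stand-in for the pre-trained generalization networks the paper actually uses: a scale-dependent structuring element closes small notches and removes slivers below the size representable at the target scale. The 0.3 mm minimum-detail rule of thumb, the ground sampling distance, and the square element are all assumptions.

```python
# Crude morphological stand-in for the learned generalization step: the
# target map scale sets the smallest detail kept in a binary building mask.
import numpy as np
from scipy import ndimage

def generalize_mask(mask: np.ndarray, map_scale: int, gsd_m: float = 0.2) -> np.ndarray:
    """Smooth a binary building mask for a target map scale (e.g. 10000)."""
    # Assume details smaller than ~0.3 mm at map scale are not representable.
    min_detail_m = 0.0003 * map_scale        # e.g. 3 m on the ground at 1:10,000
    radius_px = max(1, int(min_detail_m / gsd_m))
    structure = np.ones((radius_px, radius_px), dtype=bool)
    closed = ndimage.binary_closing(mask, structure=structure)   # fill notches
    return ndimage.binary_opening(closed, structure=structure)   # drop slivers

# mask = (land_cover == BUILDING_CLASS)      # blobs from the classifier (assumed)
# maps = {s: generalize_mask(mask, s) for s in (10000, 15000, 25000)}
```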

    Investigations on skip-connections with an additional cosine similarity loss for land cover classification

    Pixel-based land cover classification of aerial images is a standard task in remote sensing whose goal is to identify the physical material of the Earth's surface. Recently, most of the well-performing methods rely on convolutional neural networks (CNNs) with an encoder-decoder structure. In the encoder part, many successive convolution and pooling operations are applied to obtain features at a lower spatial resolution, and in the decoder part these features are up-sampled gradually, layer by layer, in order to make predictions at the original spatial resolution. However, the loss of spatial resolution caused by pooling affects the final classification performance negatively, which is compensated for by skip-connections between corresponding features in the encoder and the decoder. The most popular ways to combine features are element-wise addition of feature maps and 1x1 convolution. In this work, we investigate skip-connections. We argue that not all skip-connections are equally important and conducted experiments designed to find out which ones matter. Moreover, we propose a new cosine similarity loss function that exploits the relationship between the features of pixels belonging to the same category inside one mini-batch, i.e., these features should be close in feature space. Our experiments show that the new cosine similarity loss does help the classification. We evaluated our methods using the Vaihingen and Potsdam datasets of the ISPRS 2D semantic labelling challenge and achieved an overall accuracy of 91.1% for both test sites. © Authors 2020. All rights reserved
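
    One straightforward reading of such a loss, assuming a PyTorch setting: features of pixels sharing a label are pulled toward their per-class mean within the mini-batch. The abstract does not spell out the exact formulation, so the centroid-based form, the (sub)sampling of pixel features, and the weight lam in the usage note are assumptions.

```python
# Sketch of a within-batch cosine similarity loss: same-class pixel features
# should align with their class centroid in feature space.
import torch
import torch.nn.functional as F

def cosine_similarity_loss(features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """features: (N, D) pixel features; labels: (N,) class indices."""
    loss, count = features.new_zeros(()), 0
    for c in labels.unique():
        class_feats = features[labels == c]
        if class_feats.shape[0] < 2:
            continue                          # centroid is trivial for one pixel
        centroid = class_feats.mean(dim=0, keepdim=True)
        sim = F.cosine_similarity(class_feats, centroid, dim=1)
        loss = loss + (1.0 - sim).mean()      # 0 when all features align
        count += 1
    return loss / max(count, 1)

# Typically added to the cross-entropy loss on sampled pixel features
# (lam is an assumed weighting hyperparameter):
# total = ce_loss + lam * cosine_similarity_loss(feats, labels)
```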